
    Preparing thermal states of quantum systems by dimension reduction

    We present an algorithm that prepares thermal Gibbs states of one-dimensional quantum systems on a quantum computer without any memory overhead, and in a time significantly shorter than other known alternatives. Specifically, the time complexity is dominated by the quantity N^{∥h∥/T}, where N is the size of the system, ∥h∥ is a bound on the operator norm of the local terms of the Hamiltonian (the coupling energy), and T is the temperature. Given other results on the complexity of thermalization, this overall scaling is likely optimal. For higher dimensions, our algorithm lowers the known scaling of the time complexity with the dimension of the system by one.
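    As a rough illustration of the stated scaling (not code from the paper), the sketch below evaluates the dominant cost term N^{∥h∥/T} for a few temperatures; the system size and the unit bound on the local terms are placeholder values.

```python
def gibbs_prep_cost(n_sites, h_norm, temperature):
    """Dominant cost term N^(||h||/T) from the stated time complexity.
    Constant factors and subleading corrections are omitted."""
    return n_sites ** (h_norm / temperature)

# Illustrative values (placeholders, not taken from the paper):
# a chain of 100 sites with unit-norm local terms, at several temperatures.
for T in (4.0, 2.0, 1.0, 0.5):
    print(f"T = {T:4.1f}  ->  cost ~ {gibbs_prep_cost(100, 1.0, T):.3e}")
```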

    Fast Quantum Methods for Optimization

    Discrete combinatorial optimization consists of finding the optimal configuration that minimizes a given discrete objective function. An interpretation of such a function as the energy of a classical system allows us to reduce the optimization problem to the preparation of a low-temperature thermal state of the system. Motivated by the quantum annealing method, we present three strategies to prepare the low-temperature state that exploit quantum mechanics in remarkable ways. We focus on implementations without uncontrolled errors induced by the environment, which allows us to rigorously prove a quantum advantage. The first strategy uses a classical-to-quantum mapping, where the equilibrium properties of a classical system in d spatial dimensions can be determined from the ground-state properties of a quantum system, also in d spatial dimensions. We show how such a ground state can be prepared by means of quantum annealing, including quantum adiabatic evolutions. This mapping also allows us to unveil some fundamental relations between simulated and quantum annealing. The second strategy builds upon the first one and introduces a technique called spectral gap amplification to reduce the time required to prepare the same quantum state adiabatically. If implemented on a quantum device that exploits quantum coherence, this strategy leads to a quadratic improvement in complexity over the well-known bound of the classical simulated annealing method. The third strategy is not purely adiabatic; instead, it exploits diabatic processes between the low-energy states of the corresponding quantum system. For some problems it results in an exponential speedup (in the oracle model) over the best classical algorithms.
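    The quadratic improvement mentioned above is measured against classical simulated annealing. For context, here is a minimal sketch of that classical baseline on a toy Ising-ring objective; the single-spin-flip Metropolis moves and the geometric temperature schedule are standard choices, not details taken from the paper.

```python
import math
import random

def simulated_annealing(energy, n_spins, n_steps, t_hot=2.0, t_cold=0.05):
    """Classical simulated annealing baseline: single-spin-flip Metropolis
    updates under a geometric temperature schedule from t_hot to t_cold."""
    spins = [random.choice([-1, 1]) for _ in range(n_spins)]
    current = energy(spins)
    for step in range(n_steps):
        temp = t_hot * (t_cold / t_hot) ** (step / max(n_steps - 1, 1))
        i = random.randrange(n_spins)
        spins[i] *= -1                      # propose a single spin flip
        delta = energy(spins) - current
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current += delta                # accept the move
        else:
            spins[i] *= -1                  # reject: undo the flip
    return spins, current

# Toy objective: antiferromagnetic Ising ring, E(s) = sum_i s_i * s_{i+1}.
n = 32
ring_energy = lambda s: sum(s[i] * s[(i + 1) % n] for i in range(n))
config, final_energy = simulated_annealing(ring_energy, n, 20_000)
print("final energy:", final_energy, "(ground-state energy is", -n, ")")
```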

    Eigenstate preparation by phase decoherence

    A computation in adiabatic quantum computing is implemented by traversing a path of nondegenerate eigenstates of a continuous family of Hamiltonians. We introduce a method that traverses a discretized form of the path: at each step we apply the instantaneous Hamiltonian for a random time. The resulting decoherence approximates a projective measurement onto the desired eigenstate, achieving a version of the quantum Zeno effect. The average cost of our method is O(L^2/Δ) for constant error probability, where L is the length of the path of eigenstates and Δ is the minimum spectral gap of the Hamiltonian. For many cases of interest, L does not depend on Δ, so the scaling of the cost with the gap is better than the one obtained in rigorous proofs of the adiabatic theorem. We give an example where this situation occurs.
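    A minimal numerical sketch of the decoherence mechanism described above, on a toy two-level system: evolving under a fixed Hamiltonian for a random time and averaging suppresses the coherences between eigenstates, leaving approximately the statistics of a projective measurement in the eigenbasis. The gap, time range, and sample count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level Hamiltonian, written directly in its eigenbasis; the gap is 1.
gap = 1.0
energies = np.array([0.0, gap])

# Initial state: a superposition with populations 0.9 and 0.1 on the eigenstates.
psi = np.array([np.sqrt(0.9), np.sqrt(0.1)], dtype=complex)

# Evolve for a random time and average the resulting density matrices.
# The random relative phases cancel the off-diagonal (coherence) terms, which
# approximates a projective measurement in the eigenbasis (a Zeno-like effect).
rho = np.zeros((2, 2), dtype=complex)
n_samples = 4000
for _ in range(n_samples):
    t = rng.uniform(0.0, 100.0 / gap)        # random evolution time
    phases = np.exp(-1j * energies * t)
    phi = phases * psi
    rho += np.outer(phi, phi.conj()) / n_samples

# Magnitudes: diagonal ~ (0.9, 0.1), off-diagonals close to zero.
print(np.round(np.abs(rho), 3))
```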

    Boundaries of quantum supremacy via random circuit sampling

    Google's recent quantum supremacy experiment heralded a transition point where quantum computing performed a computational task, random circuit sampling, that is beyond the practical reach of modern supercomputers. We examine the constraints of the observed quantum runtime advantage in an analytical extrapolation to circuits with a larger number of qubits and gates. Due to the exponential decrease of the experimental fidelity with the number of qubits and gates, we demonstrate for current fidelities a theoretical classical runtime advantage for circuits beyond a depth of 100, while quantum runtimes for cross-entropy benchmarking limit the region of a quantum advantage to around 300 qubits. However, the quantum runtime advantage boundary grows exponentially with reduced error rates, and our work highlights the importance of continued progress along this line. Extrapolations of measured error rates suggest that the limiting circuit size for which a computationally feasible quantum runtime advantage in cross-entropy benchmarking can be achieved approximately coincides with expectations for early implementations of the surface code and other quantum error correction methods. Thus, the boundaries of quantum supremacy via random circuit sampling may fortuitously coincide with the advent of scalable, error-corrected quantum computing in the near term.
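    The extrapolation argument rests on a fidelity that decays exponentially with the gate count. The sketch below shows a toy model of that decay and the corresponding cross-entropy-benchmarking sample cost; the per-gate error rate, the (n_sigma / F)^2 sample estimate, and the example circuit sizes are illustrative assumptions, not numbers taken from the paper.

```python
import math

def circuit_fidelity(n_qubits, depth, error_per_gate=0.005):
    """Toy model: with roughly n_qubits * depth gates, each contributing an
    independent error of error_per_gate, fidelity decays exponentially with
    circuit size. The 0.5% rate is an illustrative placeholder."""
    return math.exp(-error_per_gate * n_qubits * depth)

def xeb_samples_needed(fidelity, n_sigma=5.0):
    """Order-of-magnitude sample count for cross-entropy benchmarking:
    resolving a signal of size F at n_sigma standard deviations takes
    roughly (n_sigma / F)^2 samples (an assumption used for illustration)."""
    return (n_sigma / fidelity) ** 2

for n_qubits, depth in [(53, 20), (70, 40), (100, 100)]:
    f = circuit_fidelity(n_qubits, depth)
    print(f"n={n_qubits:3d} depth={depth:3d}  fidelity ~ {f:.2e}  "
          f"XEB samples ~ {xeb_samples_needed(f):.2e}")
```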